 player skill



BiRating -- Iterative averaging on a bipartite graph of Beat Saber scores, player skills, and map difficulties

Casanova, Juan

arXiv.org Artificial Intelligence

Difficulty estimation of Beat Saber maps is an interesting data analysis problem and valuable to the Beat Saber competitive scene. We present a simple algorithm that iteratively averages player skill and map difficulty estimations in a bipartite graph of players and maps, connected by scores, using scores only as input. This approach simultaneously estimates player skills and map difficulties, exploiting each to improve the estimation of the other, and exploiting the relationship between multiple scores by different players on the same map, or by the same player on different maps. While we have been unable to prove or characterize theoretical convergence, the implementation exhibits convergent behaviour with low estimation error in all instances, producing accurate results. An informal qualitative evaluation involving experienced Beat Saber community members was carried out, comparing the difficulty estimations output by our algorithm with their personal perspectives on the difficulties of different maps. There was significant alignment with players' perceptions of difficulty and with other existing methods for estimating difficulty. Our approach showed significant improvement over existing methods on certain known problematic maps that are typically not estimated accurately, but it also produces problematic estimations for certain families of maps where the assumptions on the meaning of scores were inadequate (e.g. not enough scores, or scores over-optimized by players). The algorithm has important limitations related to data quality and meaningfulness, assumptions about the domain problem, and theoretical convergence. Future work would benefit significantly from a better understanding of adequate ways to quantify map difficulty in Beat Saber, including the multidimensionality of skill and difficulty, and the systematic biases present in score data.
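
As a minimal illustration of the iterative-averaging idea, the Python sketch below alternates between re-estimating map difficulties from player skills and player skills from map difficulties on the bipartite score graph. The normalization of scores to [0, 1] and the specific update rules (a map looks hard when even skilled players score low on it; a player looks skilled when they score high on hard maps) are assumptions for illustration, not the exact rules from the paper.

# Illustrative sketch of iterative averaging on a player/map bipartite graph.
# `scores` maps (player, map) -> normalized score in [0, 1]; the update rules
# below are assumptions for illustration, not the paper's exact rules.
from collections import defaultdict

def birating_sketch(scores, iterations=100):
    player_maps = defaultdict(list)   # player -> [(map, score), ...]
    map_players = defaultdict(list)   # map    -> [(player, score), ...]
    for (player, beatmap), s in scores.items():
        player_maps[player].append((beatmap, s))
        map_players[beatmap].append((player, s))

    skill = {p: 0.5 for p in player_maps}       # initial player skill estimates
    difficulty = {m: 0.5 for m in map_players}  # initial map difficulty estimates

    for _ in range(iterations):
        # A map is estimated hard if even skilled players score low on it.
        difficulty = {m: sum(skill[p] + (1.0 - s) for p, s in ps) / (2 * len(ps))
                      for m, ps in map_players.items()}
        # A player is estimated skilled if they score high on hard maps.
        skill = {p: sum(difficulty[m] + s for m, s in ms) / (2 * len(ms))
                 for p, ms in player_maps.items()}
    return skill, difficulty

# Example with two players and two maps:
scores = {("alice", "map1"): 0.95, ("alice", "map2"): 0.70,
          ("bob", "map1"): 0.80, ("bob", "map2"): 0.40}
skills, difficulties = birating_sketch(scores)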


Modeling Boundedly Rational Agents with Latent Inference Budgets

Jacob, Athul Paul, Gupta, Abhishek, Andreas, Jacob

arXiv.org Artificial Intelligence

We study the problem of modeling a population of agents pursuing unknown goals subject to unknown computational constraints. In standard models of bounded rationality, sub-optimal decision-making is simulated by adding homoscedastic noise to optimal decisions rather than explicitly simulating constrained inference. In this work, we introduce a latent inference budget model (L-IBM) that models agents' computational constraints explicitly, via a latent variable (inferred jointly with a model of agents' goals) that controls the runtime of an iterative inference algorithm. L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors. In three modeling tasks--inferring navigation goals from routes, inferring communicative intents from human utterances, and predicting next moves in human chess games--we show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty. Inferred inference budgets are themselves meaningful, efficient to compute, and correlated with measures of player skill, partner skill and task difficulty. Building effective models for multi-agent decision-making--whether cooperative or adversarial--requires understanding other agents' goals and plans.
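
As a rough sketch of the latent-inference-budget idea, the example below models an agent that plans with value iteration truncated after `budget` sweeps, while an observer infers goal and budget jointly from the agent's observed moves. The toy line world, the softmax action choice, the candidate budgets, and the uniform priors are all assumptions for illustration, not the paper's actual model.

# Illustrative sketch of the latent-inference-budget idea (not the paper's model):
# the agent plans with value iteration truncated after `budget` sweeps, and the
# observer infers goal and budget jointly from the agent's observed moves.
import numpy as np

N_STATES = 8
ACTIONS = (-1, +1)  # move left / right on a small line world

def _step(s, a):
    return min(max(s + a, 0), N_STATES - 1)

def truncated_policy(goal, budget, beta=5.0):
    """Softmax policy over Q-values from value iteration stopped after `budget` sweeps."""
    values = np.zeros(N_STATES)
    for _ in range(budget):
        for s in range(N_STATES):
            values[s] = max((1.0 if _step(s, a) == goal else 0.0) + 0.9 * values[_step(s, a)]
                            for a in ACTIONS)
    def policy(s):
        q = np.array([(1.0 if _step(s, a) == goal else 0.0) + 0.9 * values[_step(s, a)]
                      for a in ACTIONS])
        p = np.exp(beta * (q - q.max()))
        return p / p.sum()
    return policy

def infer_goal_and_budget(trajectory, goals=tuple(range(N_STATES)), budgets=(1, 2, 4, 8)):
    """Uniform-prior posterior over (goal, budget) given (state, action_index) pairs."""
    log_post = np.zeros((len(goals), len(budgets)))
    for i, g in enumerate(goals):
        for j, b in enumerate(budgets):
            pol = truncated_policy(g, b)
            log_post[i, j] = sum(np.log(pol(s)[a]) for s, a in trajectory)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Example: mostly-rightward moves from state 2 suggest a goal to the right, and the
# occasional backtrack is better explained by a smaller inference budget than by noise.
posterior = infer_goal_and_budget([(2, 1), (3, 1), (4, 1), (5, 0)])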


Exploring Dynamic Difficulty Adjustment in Videogames

Sepulveda, Gabriel K., Besoain, Felipe, Barriga, Nicolas A.

arXiv.org Artificial Intelligence

Videogames are nowadays one of the biggest entertainment industries in the world. Being part of this industry means competing against many other companies and developers, which makes fanbases vitally important: groups of customers who consistently support your company because your videogames are fun. Videogames are most entertaining when the difficulty level is a good match for the player's skill, which increases player engagement. However, not all players are equally proficient, so some kind of difficulty selection is required. In this paper, we will present Dynamic Difficulty Adjustment (DDA), a recently emerging research topic that aims to develop an automated difficulty selection mechanism that keeps the player engaged and properly challenged, neither bored nor overwhelmed. We will present some recent research addressing this issue, as well as an overview of how to implement it. Satisfactorily solving the DDA problem directly affects the player's experience when playing the game, making it of high interest to any game developer, from independent studios to 100-billion-dollar businesses, because of its potential impact on player retention and monetization.
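
A minimal control-loop sketch of the DDA idea follows, under the assumption that "properly challenged" is proxied by keeping the player's recent success rate inside a target band; the thresholds, step size, and class name are illustrative and not drawn from any cited work.

# Hypothetical DDA control loop: nudge a normalized difficulty parameter so the
# player's recent success rate stays inside a target band (illustrative values).
from collections import deque

class DifficultyAdjuster:
    def __init__(self, target_low=0.4, target_high=0.7, step=0.05, window=20):
        self.difficulty = 0.5                 # normalized difficulty in [0, 1]
        self.results = deque(maxlen=window)   # recent outcomes: 1 = success, 0 = failure
        self.target_low, self.target_high, self.step = target_low, target_high, step

    def record(self, success: bool) -> float:
        self.results.append(1 if success else 0)
        rate = sum(self.results) / len(self.results)
        if rate > self.target_high:           # player is cruising: raise difficulty
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif rate < self.target_low:          # player is struggling: lower difficulty
            self.difficulty = max(0.0, self.difficulty - self.step)
        return self.difficulty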


Monte-Carlo Tree Search for Simulation-based Strategy Analysis

Zook, Alexander, Harrison, Brent, Riedl, Mark O.

arXiv.org Artificial Intelligence

Games are often designed to shape player behavior in a desired way; however, it can be unclear how design decisions affect the space of behaviors in a game. Designers usually explore this space through human playtesting, which can be time-consuming and of limited effectiveness in exhausting the space of possible behaviors. In this paper, we propose the use of automated planning agents to simulate humans of varying skill levels to generate game playthroughs. Metrics can then be gathered from these playthroughs to evaluate the current game design and identify its potential flaws. We demonstrate this technique in two games: the popular word game Scrabble and a collectible card game of our own design named Cardonomicon. Using these case studies, we show how using simulated agents to model humans of varying skill levels allows us to extract metrics to describe game balance (in the case of Scrabble) and highlight potential design flaws (in the case of Cardonomicon).
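
To make the simulation-based evaluation concrete, here is a small sketch in the same spirit: "players" of different skill are simulated by varying a Monte-Carlo agent's rollout budget, and many playthroughs are aggregated into a simple balance metric. The toy game (Nim) and the flat Monte-Carlo move evaluation are simplifications standing in for the paper's MCTS agents and case-study games; all names and parameters here are assumptions for illustration.

# Illustrative playtesting harness (not the paper's code): simulate players of
# different skill by varying a flat Monte-Carlo agent's rollout budget, then
# collect a simple balance metric over many playthroughs of a toy game (Nim).
import random

def legal_moves(stones):          # Nim: remove 1-3 stones; the player who takes the last stone wins
    return [n for n in (1, 2, 3) if n <= stones]

def rollout(stones, player):
    """Play out the rest of the game randomly; return the winner (0 or 1)."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        player = 1 - player
    return 1 - player             # the player who just moved took the last stone

def monte_carlo_move(stones, player, budget):
    """Pick the move with the best win rate over `budget` random rollouts per move."""
    def win_rate(move):
        wins = sum(rollout(stones - move, 1 - player) == player for _ in range(budget))
        return wins / budget
    return max(legal_moves(stones), key=win_rate)

def playthrough(stones, budgets):
    """One game between two simulated players whose 'skill' is their rollout budget."""
    player, length = 0, 0
    while stones > 0:
        stones -= monte_carlo_move(stones, player, budgets[player])
        player, length = 1 - player, length + 1
    return 1 - player, length     # winner, game length

# Metric gathering: win rate of a high-budget (skilled) agent against a low-budget one.
results = [playthrough(stones=15, budgets=(50, 2)) for _ in range(200)]
strong_agent_win_rate = sum(w == 0 for w, _ in results) / len(results)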